Benchmarking is the quantitative method most commonly used when managers contemplate procuring a large business information system. It consists of running a group of representative applications on the systems offered by vendors to validate the vendors' claims. Implementing benchmarks can be very costly, as users must convert, run, and test applications on several partially compatible computer systems. Benchmarking works well in modern database management system (DBMS)-oriented applications because system performance is more a function of the database structure and activity than of the complexity of the application code. Earlier research focused primarily on designing various benchmarks for database systems; the decision problem of finding an optimal mix of benchmarks has largely been overlooked. In this paper, we examine the problem of defining the most economical process for generating and evaluating the appropriate mix of benchmarks to be run across the contending information systems. Our analytical approach considers information-gathering priorities, acquisition and execution costs, resource consumption, and overall time requirements. We present a multiobjective decision-making approach for deriving the optimal mix of benchmarks; this approach reflects the major organizational objectives in more than simple one-dimensional numerical terms. A practical example illustrates the utility of this approach for evaluating a client-server relational database system.
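The abstract does not spell out the formulation of the benchmark-selection problem; as an illustrative sketch only, a weighted-sum multiobjective selection over hypothetical candidate benchmarks (all names, information values, costs, and time figures below are invented for illustration) might look like:

```python
from itertools import combinations

# Hypothetical benchmark candidates: (name, info_value, cost, hours).
# All figures are invented for illustration only.
BENCHMARKS = [
    ("oltp_mix",    0.9, 40, 12),
    ("batch_load",  0.5, 15,  6),
    ("adhoc_query", 0.7, 25,  8),
    ("report_gen",  0.4, 10,  4),
]

def best_mix(budget, max_hours, weights=(1.0, 0.02, 0.03)):
    """Brute-force search for the benchmark subset maximizing a
    weighted score: information value minus cost and time penalties,
    subject to budget and total-time constraints."""
    w_info, w_cost, w_time = weights
    best, best_score = (), float("-inf")
    for r in range(1, len(BENCHMARKS) + 1):
        for subset in combinations(BENCHMARKS, r):
            cost = sum(b[2] for b in subset)
            hours = sum(b[3] for b in subset)
            if cost > budget or hours > max_hours:
                continue  # infeasible under the resource limits
            info = sum(b[1] for b in subset)
            score = w_info * info - w_cost * cost - w_time * hours
            if score > best_score:
                best, best_score = subset, score
    return [b[0] for b in best], round(best_score, 3)
```

Here the weights encode the relative priority of information gathering versus cost and time; in practice the paper's approach would derive such trade-offs from the organization's objectives rather than fixed scalar weights.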